
    Automatic parameter tuning for Class-Based Virtual Machine Placement in cloud infrastructures

    A critical task in the management of Infrastructure as a Service cloud data centers is the placement of Virtual Machines (VMs) over the infrastructure of physical nodes. However, as the size of data centers grows, finding optimal VM placement solutions becomes challenging. The typical approach is to rely on heuristics that improve VM placement scalability by (partially) discarding information about the VM behavior. An alternative approach providing encouraging results, namely Class-Based Placement (CBP), has been proposed recently. CBP considers VMs divided into classes with similar behavior in terms of resource usage. This technique can obtain high-quality placement because it considers a detailed model of VM behavior on a per-class basis. At the same time, scalability is achieved by considering a small-scale VM placement problem that is replicated as a building block for the whole data center. However, a critical parameter of the CBP technique is the number (and size) of the building blocks to consider. Many small building blocks may reduce the overall VM placement solution quality due to fragmentation of the physical node resources over blocks. On the other hand, a few large building blocks may become computationally expensive to handle and may be unsolvable due to the problem complexity. This paper addresses this problem by analyzing the impact of block size on the performance of class-based VM placement. Furthermore, we propose an algorithm to estimate the best number of blocks. Our proposal is validated through experimental results based on a real cloud computing data center.
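
    To make the block-size trade-off concrete, here is a minimal Python sketch (not the paper's algorithm; the class names, demands and the first-fit-decreasing packing are illustrative assumptions): splitting the VM classes into more blocks rounds each class count up per block, so fragmentation grows with the number of blocks, while very few blocks mean one large packing problem per block.

    # Hedged sketch, not the algorithm proposed in the paper: it only illustrates
    # why the number of blocks matters. Each VM class has one demand profile, every
    # block receives an equal share of each class, and the block-level packing
    # (here a simple first-fit-decreasing) is replicated over the data center.
    from math import ceil

    def pack_block(demands, capacity=1.0):
        """First-fit-decreasing packing of per-VM demands onto unit-capacity nodes."""
        nodes = []                                   # load already placed on each node
        for d in sorted(demands, reverse=True):
            for i, load in enumerate(nodes):
                if load + d <= capacity + 1e-9:
                    nodes[i] = load + d
                    break
            else:
                nodes.append(d)
        return len(nodes)

    def nodes_for_blocking(class_counts, class_demand, n_blocks):
        """Total nodes needed when VMs are split evenly into n_blocks equal blocks."""
        block_demands = []
        for cls, count in class_counts.items():
            per_block = ceil(count / n_blocks)       # fragmentation appears here
            block_demands += [class_demand[cls]] * per_block
        return n_blocks * pack_block(block_demands)

    # Illustrative instance: 3 VM classes, comparing candidate block counts.
    counts = {"web": 300, "db": 120, "batch": 180}
    demand = {"web": 0.20, "db": 0.45, "batch": 0.30}
    for b in (1, 2, 5, 10, 30, 60):
        print(b, "blocks ->", nodes_for_blocking(counts, demand, b), "nodes")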

    Detecting Similarities in Virtual Machine Behavior for Cloud Monitoring using Smoothed Histograms

    The growing size and complexity of cloud systems determine scalability issues for resource monitoring and management. While most existing solutions consider each Virtual Machine (VM) as a black box with independent characteristics, we embrace a new perspective where VMs with similar behaviors in terms of resource usage are clustered together. We argue that this new approach has the potential to address scalability issues in cloud monitoring and management. In this paper, we propose a technique to cluster VMs starting from the usage of multiple resources, assuming no knowledge of the services executed on them. This innovative technique models VM behavior by exploiting the probability histogram of their resource usage, and performs smoothing-based noise reduction and selection of the most relevant information to consider for the clustering process. Through extensive evaluation, we show that our proposal achieves high and stable performance in terms of automatic VM clustering, and can reduce the monitoring requirements of cloud systems.
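
    A minimal sketch of the general idea (probability histograms, smoothing, clustering), assuming NumPy, SciPy and scikit-learn are available; the bin count, the smoothing width and the final k-means step are illustrative choices, not the paper's exact pipeline.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d
    from sklearn.cluster import KMeans

    def vm_signature(samples_per_metric, bins=32, sigma=2.0):
        """Concatenate smoothed probability histograms of each monitored resource."""
        parts = []
        for samples in samples_per_metric:           # e.g. CPU %, normalized net I/O
            hist, _ = np.histogram(samples, bins=bins, range=(0.0, 100.0), density=True)
            parts.append(gaussian_filter1d(hist, sigma=sigma))   # noise reduction
        return np.concatenate(parts)

    # Toy traces for 6 VMs over 2 metrics, each normalized to [0, 100].
    rng = np.random.default_rng(0)
    vms = [[rng.normal(mu, 5, 500).clip(0, 100) for mu in profile]
           for profile in [(20, 10), (22, 12), (70, 60), (68, 55), (20, 12), (72, 58)]]

    X = np.array([vm_signature(v) for v in vms])
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(labels)      # VMs with similar usage profiles fall in the same cluster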

    A comparison of techniques to detect similarities in cloud virtual machines

    Scalability in monitoring and management of cloud data centres may be improved through the clustering of virtual machines (VMs) exhibiting similar behaviour. However, available solutions for automatic VM clustering present some important drawbacks that hinder their applicability to real cloud scenarios. For example, existing solutions show a clear trade-off between the accuracy of the VM clustering and the computational cost of the automatic process; moreover, their performance shows a strong dependence on specific technique parameters. To overcome these issues, we propose a novel approach for VM clustering that uses Mixtures of Gaussians (MoGs) together with the Kullback-Leibler divergence to model similarity between VMs. Furthermore, we provide a thorough experimental evaluation of our proposal and of existing techniques to identify the most suitable solution for different workload scenarios.
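
    The following sketch shows one way to realize the MoG-plus-Kullback-Leibler idea, assuming NumPy and a recent scikit-learn: fit a Gaussian mixture per VM, estimate the symmetrized KL divergence by Monte Carlo sampling (mixtures have no closed-form KL), and cluster on the resulting distance matrix. The component count, sample size and final clustering step are illustrative, not the paper's exact procedure.

    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.cluster import AgglomerativeClustering

    def fit_mog(usage, components=3):
        """usage: (samples, resources) matrix of one VM's metrics."""
        return GaussianMixture(n_components=components, random_state=0).fit(usage)

    def kl_mc(p, q, n=2000):
        """Monte Carlo estimate of KL(p || q) between two fitted mixtures."""
        x, _ = p.sample(n)
        return float(np.mean(p.score_samples(x) - q.score_samples(x)))

    def sym_kl(p, q):
        return 0.5 * (kl_mc(p, q) + kl_mc(q, p))

    rng = np.random.default_rng(1)
    traces = [rng.normal(loc, 1.0, size=(400, 2)) for loc in (0.2, 0.25, 3.0, 3.1)]
    models = [fit_mog(t) for t in traces]

    n = len(models)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            # clip tiny negative Monte Carlo estimates to keep a valid distance
            D[i, j] = D[j, i] = max(0.0, sym_kl(models[i], models[j]))

    labels = AgglomerativeClustering(n_clusters=2, metric="precomputed",
                                     linkage="average").fit_predict(D)
    print(labels)      # expected: the first two VMs in one cluster, the last two in another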

    Assessing the overhead and scalability of system monitors for large data centers

    Current data centers are shifting towards cloud-based architectures as a means to obtain a scalable, cost-effective, robust service platform. In spite of this, the underlying management infrastructure has grown in terms of hardware resources and software complexity, making automated resource monitoring a necessity. There are several infrastructure monitoring tools designed to scale to a very high number of physical nodes. However, these tools either collect performance measures at a low frequency (missing the chance to capture the dynamics of a short-term management task) or are simply not equipped with instrumentation specific to cloud computing and virtualization. In this scenario, monitoring the correctness and efficiency of live migrations can become a nightmare. This situation will only worsen in the future, with the increased service demand due to the spreading of the user base. In this paper, we assess the scalability of a prototype monitoring subsystem for different user scenarios. We also identify all the major bottlenecks and give insight into how to remove them.
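
    As a toy illustration only (the paper evaluates its own monitoring prototype, not this code), a collector that samples a few host metrics at a configurable period makes the frequency-versus-overhead tension measurable; psutil is an assumed dependency and the metric set is arbitrary.

    import time
    import psutil

    def collect_once():
        return {
            "cpu": psutil.cpu_percent(interval=None),
            "mem": psutil.virtual_memory().percent,
            "net": psutil.net_io_counters().bytes_sent,
        }

    def measure_overhead(period_s, duration_s=10.0):
        """Fraction of wall-clock time spent collecting at the given period."""
        busy, samples, t_end = 0.0, [], time.monotonic() + duration_s
        while time.monotonic() < t_end:
            t0 = time.monotonic()
            samples.append(collect_once())
            busy += time.monotonic() - t0
            time.sleep(period_s)
        return busy / duration_s, len(samples)

    for period in (5.0, 1.0, 0.1):          # low- vs. high-frequency sampling
        overhead, n = measure_overhead(period)
        print(f"period={period}s samples={n} overhead={overhead:.4%}")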

    On private CDNs with off-sourced network infrastructures: A model and a case study

    The delivery of multimedia contents through a Content Delivery Network (CDN) is typically handled by a specific third party, separated from the content provider. However, in some specific cases, the content provider may be interested in carrying out this function using a Private CDN, possibly relying on an off-sourced network infrastructure. This scenario poses new challenges and limitations with respect to the typical case of content delivery. First, the system has to face a different workload, as the content consumers are typically part of the same organization as the content provider. Second, the off-sourced nature of the network infrastructure has a major impact on the available choices for CDN design. In this paper we develop an exact mathematical model for the design of a Private CDN addressing the issues and constraints typical of such a scenario. Furthermore, we analyze different heuristics to solve the optimization problem. We apply the proposed model to a real case study and validate the results by means of simulation.
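
    As a rough illustration of the heuristic side of such a design problem (not the paper's exact model or heuristics; the sites, costs and capacities below are made up), a greedy strategy can repeatedly open the candidate replica site with the lowest cost per unit of newly served internal demand:

    def greedy_cdn_placement(sites, consumers, capacity):
        """
        sites:     {site: opening_cost}
        consumers: {consumer: (demand, {site: cost of serving that consumer from site})}
        capacity:  per-site capacity, in the same units as demand
        Greedy heuristic: each consumer is served entirely by one replica site.
        """
        residual, assignment = {}, {}        # residual capacity of opened sites
        unassigned = set(consumers)
        while unassigned:
            best = None                      # (cost per unit of new demand, site, consumers)
            for s, open_cost in sites.items():
                cap = residual.get(s, capacity)
                cost = 0.0 if s in residual else open_cost
                chosen, served = [], 0.0
                for c in sorted(unassigned, key=lambda c: consumers[c][1][s]):
                    d = consumers[c][0]
                    if served + d <= cap:
                        chosen.append(c)
                        served += d
                        cost += consumers[c][1][s]
                if served and (best is None or cost / served < best[0]):
                    best = (cost / served, s, chosen)
            if best is None:
                raise RuntimeError("not enough replica capacity for the remaining demand")
            _, s, chosen = best
            residual[s] = residual.get(s, capacity) - sum(consumers[c][0] for c in chosen)
            assignment.update({c: s for c in chosen})
            unassigned -= set(chosen)
        return assignment

    # Toy instance: 3 candidate replica sites, 4 internal consumer sites.
    sites = {"dc1": 10.0, "dc2": 8.0, "pop3": 5.0}
    consumers = {
        "hq":     (40, {"dc1": 1.0, "dc2": 3.0, "pop3": 6.0}),
        "plant":  (30, {"dc1": 4.0, "dc2": 1.5, "pop3": 2.0}),
        "lab":    (20, {"dc1": 5.0, "dc2": 2.0, "pop3": 1.0}),
        "branch": (10, {"dc1": 6.0, "dc2": 4.0, "pop3": 1.5}),
    }
    print(greedy_cdn_placement(sites, consumers, capacity=60))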

    Special issue on algorithms for the resource management of large scale infrastructures

    Modern distributed systems are becoming increasingly complex as virtualization is being applied at the levels of both computing and networking. Consequently, the resource management of this infrastructure requires innovative and efficient solutions. This issue is further exacerbated by the unpredictable workload of modern applications and the need to limit the global energy consumption. The purpose of this special issue is to present recent advances and emerging solutions to address the challenge of resource management in the context of modern large-scale infrastructures. We believe that the four selected papers present an up-to-date view of the emerging trends and propose innovative solutions to support efficient and self-managing systems that are able to adapt, manage, and cope with changes arising from continually changing workloads and application deployment settings, without the need for human supervision.

    Dynamic request management algorithms for Web-based services in cloud computing

    Service providers of Web-based services can take advantage of many convenient features of cloud computing infrastructures, but they still have to implement request management algorithms that are able to face sudden peaks of requests. We consider distributed algorithms implemented by front-end servers to dispatch and redirect requests among application servers. Current solutions based on load-blind algorithms, or considering just server load and thresholds, are inadequate to cope with the demand patterns reaching modern Internet application servers. In this paper, we propose and evaluate a request management algorithm, namely Performance Gain Prediction, that combines several pieces of information (server load, computational cost of a request, user session migration and redirection delay) to predict whether the redirection of a request to another server may result in a shorter response time. To the best of our knowledge, no other study combines information about infrastructure status, user request characteristics and redirection overhead for dynamic request management in cloud computing. Our results show that the proposed algorithm is able to reduce the response time with respect to existing request management algorithms operating on the basis of thresholds.
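
    A minimal sketch of this kind of decision logic, with an assumed load-inflation cost model and illustrative names (not the paper's exact Performance Gain Prediction formulation): redirect only when the predicted remote response time, including redirection delay and possible session migration, beats the predicted local one.

    from dataclasses import dataclass

    @dataclass
    class Server:
        name: str
        utilization: float                  # current CPU utilization in [0, 1)
        def predicted_service_time(self, request_cost_s: float) -> float:
            # Simple load-dependent inflation of the request's nominal cost.
            return request_cost_s / max(1e-6, 1.0 - self.utilization)

    def should_redirect(local: Server, remote: Server, request_cost_s: float,
                        redirect_delay_s: float, session_migration_s: float = 0.0) -> bool:
        t_local = local.predicted_service_time(request_cost_s)
        t_remote = (redirect_delay_s + session_migration_s
                    + remote.predicted_service_time(request_cost_s))
        return t_remote < t_local           # redirect only on a predicted performance gain

    front_local = Server("app-1", utilization=0.92)
    front_remote = Server("app-2", utilization=0.35)
    print(should_redirect(front_local, front_remote, request_cost_s=0.08,
                          redirect_delay_s=0.02, session_migration_s=0.05))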

    Minimizing computing-plus-communication energy consumptions in virtualized networked data centers

    In this paper, we propose a dynamic resource provisioning scheduler to maximize the application throughput and minimize the computing-plus-communication energy consumption in virtualized networked data centers. The goal is to maximize energy efficiency while meeting hard QoS requirements on processing delay. The resulting optimal resource scheduler is adaptive and jointly performs: i) admission control of the input traffic offered by the cloud provider; ii) adaptive balanced control and dispatching of the admitted traffic; iii) dynamic reconfiguration and consolidation of the Dynamic Voltage and Frequency Scaling (DVFS)-enabled virtual machines instantiated onto the virtualized data center. The proposed scheduler can manage changes of the workload without requiring estimation or prediction of its future trend. Furthermore, it takes into account the most advanced mechanisms for power reduction in servers, such as DVFS and reduced power states. The performance of the proposed scheduler is numerically tested and compared against that of some state-of-the-art schedulers, under both synthetically generated and measured real-world workload traces. The results confirm the good delay-versus-energy performance of the proposed scheduler.
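
    For intuition only (the paper formulates a joint optimization, not this simplified rule), a DVFS-style sketch: choose the lowest discrete frequency that meets a hard processing-delay bound, then account for computing-plus-communication energy with an assumed cubic-power CPU model and a fixed-rate network transfer. All constants below are illustrative.

    def pick_frequency(cycles, deadline_s, freqs_hz):
        """Lowest available DVFS frequency that finishes `cycles` within `deadline_s`."""
        for f in sorted(freqs_hz):
            if cycles / f <= deadline_s:
                return f
        raise ValueError("deadline not achievable at the maximum frequency")

    def energy_joules(cycles, f_hz, bits, k_dyn=1e-27, p_net_w=2.0, rate_bps=1e9):
        compute = k_dyn * (f_hz ** 2) * cycles        # E = k * f^2 * cycles (P ~ k * f^3)
        communicate = p_net_w * (bits / rate_bps)     # NIC power x transfer time
        return compute + communicate

    freqs = [1.2e9, 1.6e9, 2.0e9, 2.4e9]              # assumed DVFS states (Hz)
    cycles, deadline, bits = 3e9, 2.0, 8e8            # workload, QoS bound, traffic
    f = pick_frequency(cycles, deadline, freqs)
    print(f / 1e9, "GHz ->", round(energy_joules(cycles, f, bits), 3), "J")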

    Parameter Tuning for Scalable Multi-Resource Server Consolidation in Cloud Systems

    Infrastructure as a Service cloud providers are increasingly relying on scalable and efficient Virtual Machine (VM) placement as the main solution for reducing unnecessary costs and waste of physical resources. However, the continuous growth of the size of cloud data centers poses scalability challenges to finding optimal placement solutions. The use of heuristics and simplified server consolidation models that partially discard information about the VMs' behavior represents the typical approach to guarantee scalability, but at the expense of suboptimal placement solutions. A recently proposed alternative approach, namely Class-Based Placement (CBP), divides VMs into classes with similar behavior in terms of resource usage, and addresses scalability by considering a small-scale server consolidation problem that is replicated as a building block for the whole data center. However, the server consolidation model exploited by the CBP technique suffers from two main limitations. First, it considers only one VM resource (CPU) for the consolidation problem. Second, it does not analyze the impact of the number (and size) of building blocks to consider. Many small building blocks may reduce the overall VM placement solution quality due to fragmentation of the physical server resources over blocks. On the other hand, a few large building blocks may become computationally expensive to handle and may be unsolvable due to the problem complexity. This paper extends the CBP server consolidation model to take into account multiple resources. Furthermore, we analyze the impact of block size on the performance of the proposed consolidation model, and we present and compare multiple strategies to estimate the best number of blocks. Our proposal is validated through experimental results based on a real cloud computing data center.
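
    A minimal sketch of the multi-resource ingredient, assuming normalized CPU and memory demands and a simple vector first-fit-decreasing rule (the paper's consolidation model is richer than this): adding a second resource can change how many servers a block needs.

    def ffd_vector_pack(vms, capacity=(1.0, 1.0)):
        """vms: list of (cpu, mem) demands, normalized to the server capacity."""
        servers = []                                  # accumulated (cpu, mem) load per server
        for cpu, mem in sorted(vms, key=lambda v: max(v), reverse=True):
            for i, (c, m) in enumerate(servers):
                if c + cpu <= capacity[0] and m + mem <= capacity[1]:
                    servers[i] = (c + cpu, m + mem)
                    break
            else:
                servers.append((cpu, mem))
        return len(servers)

    # A CPU-only model would pack these four VMs onto 2 servers; memory raises that to 4.
    vms = [(0.5, 0.6), (0.5, 0.6), (0.4, 0.6), (0.4, 0.6)]
    print("servers needed:", ffd_vector_pack(vms))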